Building a Cognitive Twin Using a Distributed Cognitive System and an Evolution Strategy
Gibaut, Wandemberg, Gudwin, Ricardo
This work proposes an approach that uses an evolutionary algorithm along with traditional Machine Learning methods to build a digital, distributed cognitive agent capable of emulating the potential actions (input-output behavior) of a user while allowing further analysis and experimentation - at a certain level - of its internal structures. We focus on the usage of simple devices and the automation of this building process, rather than manually designing the agent.
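For intuition, a minimal sketch of the kind of evolution-strategy loop this abstract describes: tuning a parametric input-output model so it reproduces recorded user behavior. The toy linear model, the data, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal (1 + lambda) evolution-strategy sketch for fitting a parametric
# input-output model to recorded user behavior. The linear model and the
# synthetic data are placeholder assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recorded (input, output) pairs from device interactions.
X = rng.normal(size=(200, 4))
true_w = np.array([0.5, -1.0, 2.0, 0.1])
y = X @ true_w

def fitness(w):
    """Negative mean squared error between model output and observed behavior."""
    return -np.mean((X @ w - y) ** 2)

# (1 + lambda) ES: the parent survives unless an offspring improves on it.
parent = rng.normal(size=4)
sigma, n_offspring = 0.5, 20
for generation in range(200):
    offspring = parent + sigma * rng.normal(size=(n_offspring, 4))
    best = max(offspring, key=fitness)
    if fitness(best) > fitness(parent):
        parent = best
    sigma *= 0.99  # simple step-size decay

print("recovered weights:", np.round(parent, 2))
```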
Evaluation of GlassNet for physics-informed machine learning of glass stability and glass-forming ability
Allec, Sarah I., Lu, Xiaonan, Cassar, Daniel R., Nguyen, Xuan T., Hegde, Vinay I., Mahadevan, Thiruvillamalai, Peterson, Miroslava, Du, Jincheng, Riley, Brian J., Vienna, John D., Saal, James E.
Glasses form the basis of many modern applications and also hold great potential for future medical and environmental applications. However, their structural complexity and large composition space make design and optimization challenging for certain applications. Of particular importance for glass processing is an estimate of a given composition's glass-forming ability (GFA). However, there remain many open questions regarding the physical mechanisms of glass formation, especially in oxide glasses. It is apparent that a proxy for GFA would be highly useful in glass processing and design, but identifying such a surrogate property has proven difficult. Here, we explore the application of an open-source pre-trained NN model, GlassNet, that can predict the characteristic temperatures necessary to compute glass stability (GS) and assess the feasibility of using these physics-informed ML (PIML)-predicted GS parameters to estimate GFA. In doing so, we track the uncertainties at each step of the computation - from the original ML prediction errors, to the compounding of errors during GS estimation, and finally to the estimation of GFA. While GlassNet exhibits reasonable accuracy on all individual properties, we observe a large compounding of error in the combination of these individual predictions for the prediction of GS, finding that random forest models offer similar accuracy to GlassNet. We also break down the ML performance on different glass families and find that the error in GS prediction is correlated with the error in crystallization peak temperature prediction. Lastly, we utilize this finding to assess the relationship between top-performing GS parameters and GFA for two ternary glass systems: sodium borosilicate and sodium iron phosphate glasses. We conclude that to obtain true ML predictive capability of GFA, significantly more data needs to be collected.
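To make the error-compounding point concrete, here is a small sketch of how a GS parameter is assembled from ML-predicted characteristic temperatures and how the individual prediction errors propagate. The Hruby parameter is a standard GS metric; the temperatures and error magnitudes below are made-up placeholders, not GlassNet outputs.

```python
# Illustrative sketch: combining ML-predicted characteristic temperatures
# into the Hruby glass-stability parameter and propagating their errors
# by Monte Carlo. All numbers are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(1)

def hruby(Tg, Tx, Tl):
    """Hruby parameter K_H = (Tx - Tg) / (Tl - Tx), temperatures in Kelvin."""
    return (Tx - Tg) / (Tl - Tx)

# Hypothetical point predictions (K) with per-property RMSE-like errors.
Tg, Tg_err = 750.0, 15.0   # glass transition temperature
Tx, Tx_err = 900.0, 25.0   # crystallization onset/peak temperature
Tl, Tl_err = 1250.0, 30.0  # liquidus temperature

# Sample each temperature independently and observe the spread of the
# combined GS estimate: the ratio amplifies the individual errors.
samples = hruby(
    rng.normal(Tg, Tg_err, 10_000),
    rng.normal(Tx, Tx_err, 10_000),
    rng.normal(Tl, Tl_err, 10_000),
)
print(f"K_H = {hruby(Tg, Tx, Tl):.3f} +/- {samples.std():.3f}")
```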
INACIA: Integrating Large Language Models in Brazilian Audit Courts: Opportunities and Challenges
Pereira, Jayr, Assumpcao, Andre, Trecenti, Julio, Airosa, Luiz, Lente, Caio, Cléto, Jhonatan, Dobins, Guilherme, Nogueira, Rodrigo, Mitchell, Luis, Lotufo, Roberto
This paper introduces INACIA (Instrução Assistida com Inteligência Artificial), a groundbreaking system designed to integrate Large Language Models (LLMs) into the operational framework of the Brazilian Federal Court of Accounts (TCU). The system automates various stages of case analysis, including basic information extraction, admissibility examination, periculum in mora and fumus boni iuris analyses, and recommendation generation. Through a series of experiments, we demonstrate INACIA's potential in extracting relevant information from case documents, evaluating its legal plausibility, and formulating propositions for judicial decision-making. Utilizing a validation dataset alongside LLMs, our evaluation methodology presents an innovative approach to assessing system performance, correlating highly with human judgment. The results highlight INACIA's proficiency in handling complex legal tasks, indicating its suitability for augmenting efficiency and judicial fairness within legal systems. The paper also discusses potential enhancements and future applications, positioning INACIA as a model for worldwide AI integration in legal domains.
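As a rough illustration of the staged pipeline this abstract describes, the sketch below chains LLM calls so each stage consumes the case text plus the previous stage's output. The `complete` callable stands in for any completion client; the stage names and prompt templates are assumptions for illustration, not INACIA's actual prompts.

```python
# Schematic sketch of a staged LLM case-analysis pipeline. Stage names,
# prompts, and chaining strategy are hypothetical, not INACIA's design.
from typing import Callable

def run_pipeline(case_text: str, complete: Callable[[str], str]) -> dict:
    """Run the stages in order, feeding each stage the prior stage's output."""
    results: dict[str, str] = {}
    stages = [
        ("basic_info", "Extract parties, subject, and amounts from:\n{case}"),
        ("admissibility", "Assess admissibility given these facts:\n{prior}"),
        ("periculum_fumus", "Analyze periculum in mora and fumus boni iuris:\n{prior}"),
        ("recommendation", "Draft a recommendation based on:\n{prior}"),
    ]
    prior = case_text
    for name, template in stages:
        prior = complete(template.format(case=case_text, prior=prior))
        results[name] = prior
    return results

# Usage with a dummy "LLM" that just echoes a summary of its prompt.
out = run_pipeline("Case text here.", lambda p: f"[output for: {p[:40]}...]")
print(list(out))
```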
The Age of Synthetic Realities: Challenges and Opportunities
Cardenuto, João Phillipe, Yang, Jing, Padilha, Rafael, Wan, Renjie, Moreira, Daniel, Li, Haoliang, Wang, Shiqi, Andaló, Fernanda, Marcel, Sébastien, Rocha, Anderson
Synthetic realities are digital creations or augmentations that are contextually generated through the use of Artificial Intelligence (AI) methods, leveraging extensive amounts of data to construct new narratives or realities, regardless of the intent to deceive. In this paper, we delve into the concept of synthetic realities and their implications for Digital Forensics and society at large within the rapidly advancing field of AI. We highlight the crucial need for the development of forensic techniques capable of identifying harmful synthetic creations and distinguishing them from reality. This is especially important in scenarios involving the creation and dissemination of fake news, disinformation, and misinformation. Our focus extends to various forms of media, such as images, videos, audio, and text, as we examine how synthetic realities are crafted and explore approaches to detecting these malicious creations. Additionally, we shed light on the key research challenges that lie ahead in this area. This study is of paramount importance due to the rapid progress of AI generative techniques and their impact on the fundamental principles of Forensic Science.
Incremental procedural and sensorimotor learning in cognitive humanoid robots
Rossi, Leonardo de Lellis, Berto, Leticia Mara, Rohmer, Eric, Costa, Paula Paro, Gudwin, Ricardo Ribeiro, Colombini, Esther Luna, Simoes, Alexandre da Silva
The ability to automatically learn movements and behaviors of increasing complexity is a long-term goal in autonomous systems. Indeed, this is a very complex problem that involves understanding how knowledge is acquired and reused by humans, as well as proposing mechanisms that allow artificial agents to reuse previous knowledge. Inspired by the first three sensorimotor substages of Jean Piaget's theory, this work presents a cognitive agent based on CONAIM (Conscious Attention-Based Integrated Model) that can learn procedures incrementally. Throughout the paper, we show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent. Experiments were conducted with a humanoid robot in a simulated environment modeled with the Cognitive Systems Toolkit (CST) performing an object-tracking task. The system is modeled using a single procedural learning mechanism based on Reinforcement Learning. The agent's increasing cognitive complexity is managed by adding new terms to the reward function for each learning phase. Results show that this approach is capable of solving complex tasks incrementally.
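The phased-reward idea is easy to picture in code: later learning phases keep the earlier reward terms and enable new ones. The specific terms, weights, and state fields below are illustrative assumptions, not the paper's exact reward design.

```python
# Sketch of a phase-gated reward function: each learning phase retains
# the previous terms and adds a new one. Terms and weights are hypothetical.
def reward(state: dict, phase: int) -> float:
    """Cumulative reward: later phases keep earlier terms and add new ones."""
    r = 0.0
    if phase >= 1:                       # substage 1: reflex-like response
        r += 1.0 if state["object_seen"] else 0.0
    if phase >= 2:                       # substage 2: keep the object centered
        r += -0.1 * abs(state["gaze_offset"])
    if phase >= 3:                       # substage 3: coordinated, stable tracking
        r += 0.5 if state["tracking_stable"] else 0.0
    return r

# Example: the same state evaluated under increasing cognitive phases.
s = {"object_seen": True, "gaze_offset": 0.3, "tracking_stable": False}
print([reward(s, p) for p in (1, 2, 3)])
```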
An In-Depth Study on Open-Set Camera Model Identification
Júnior, Pedro Ribeiro Mendes, Bondi, Luca, Bestagini, Paolo, Tubaro, Stefano, Rocha, Anderson
Camera model identification refers to the problem of linking a picture to the camera model used to shoot it. As this might be an enabling factor in different forensic applications to single out possible suspects (e.g., detecting the author of child abuse or terrorist propaganda material), many accurate camera model attribution methods have been developed in the literature. One of their main drawbacks, however, is the typical closed-set assumption of the problem. This means that an investigated photograph is always assigned to one camera model within a set of known ones present during investigation, i.e., training time, and the fact that the picture can come from a completely unrelated camera model during actual testing is usually ignored. Under realistic conditions, it is not possible to assume that every picture under analysis belongs to one of the available camera models. To deal with this issue, in this paper, we present the first in-depth study on the possibility of solving the camera model identification problem in open-set scenarios. Given a photograph, we aim at detecting whether it comes from one of the known camera models of interest or from an unknown device. We compare different feature extraction algorithms and classifiers specifically targeting open-set recognition. We also evaluate possible open-set training protocols that can be applied along with any open-set classifier. More specifically, we evaluate one training protocol targeted for open-set classifiers with deep features. We observe that a simpler version of those training protocols achieves results similar to the one that requires extra data, which can be useful in many applications in which deep features are employed. Thorough testing on independent datasets shows that it is possible to leverage a recently proposed convolutional neural network as feature extractor paired with a properly trained open-set classifier...
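A minimal sketch of the open-set rejection idea described here: a feature extractor maps images to vectors, a classifier assigns one of the known camera models, and samples too far from every known class are rejected as unknown. The synthetic features, nearest-centroid rule, and threshold are simplifying assumptions; the paper evaluates considerably more sophisticated open-set classifiers.

```python
# Toy open-set classification: nearest-centroid assignment over known
# camera models, with distance-based rejection of unknown devices.
# Features are synthetic placeholders standing in for CNN embeddings.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical embeddings for 3 known camera models.
centroids = rng.normal(size=(3, 8))
train = {k: centroids[k] + 0.1 * rng.normal(size=(50, 8)) for k in range(3)}
means = np.stack([train[k].mean(axis=0) for k in range(3)])

def classify_open_set(feature, threshold=1.0):
    """Return the nearest known model, or -1 if every model is too far away."""
    dists = np.linalg.norm(means - feature, axis=1)
    k = int(dists.argmin())
    return k if dists[k] < threshold else -1  # -1 means "unknown camera"

known = train[1][0]                      # close to model 1's centroid
unknown = rng.normal(size=8) + 5.0       # far from every known model
print(classify_open_set(known), classify_open_set(unknown))
```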
RIn-Close_CVC2: an even more efficient enumerative algorithm for biclustering of numerical datasets
Veroneze, Rosana, Von Zuben, Fernando J.
Rosana Veroneze is a postdoctoral researcher at the Department of Computer Engineering and Industrial Automation, School of Electrical and Computer Engineering, University of Campinas (Unicamp). Her research interests include computational intelligence, data mining, and machine learning. Fernando J. Von Zuben is a Full Professor at the Department of Computer Engineering and Industrial Automation, School of Electrical and Computer Engineering, University of Campinas (Unicamp). The main topics of his research are computational intelligence, bioinspired computing, multivariate data analysis, and machine learning.